
Conversation

@oscarandersson8218 (Collaborator) commented Oct 21, 2025

Use the Node meta 'custom' field to propagate information from the quantizer to the partitioner using a new ArmAnnotationInfo data class. This allows us to track quantized nodes reliably, which is useful for determining which nodes should 'fold' their quantization parameters and which should be kept in fp when mixing integer and float in a sub-graph. A minimal sketch of the idea is given below.

cc @freddan80 @per @zingo @digantdesai
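
As a rough illustration of the mechanism (the field names, the meta key, and the helper functions here are assumptions for the sketch, not the PR's actual code):

```python
# Minimal sketch of the approach, not the PR's actual implementation.
# Field names, the meta key, and the helpers are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class ArmAnnotationInfo:
    # Records that the quantizer annotated this node.
    quantized: bool
    CUSTOM_META_KEY = "arm_annotation_info"  # key inside node.meta["custom"]

def mark_quantized(node) -> None:
    # Quantizer side: stash the info in the 'custom' meta field, which is
    # preserved on the exported graph and visible to the partitioner.
    meta_custom = node.meta.get("custom", {})
    meta_custom[ArmAnnotationInfo.CUSTOM_META_KEY] = ArmAnnotationInfo(quantized=True)
    node.meta["custom"] = meta_custom

def is_quantized(node) -> bool:
    # Partitioner side: annotated nodes fold their quantization parameters;
    # unannotated ones are kept in fp.
    info = node.meta.get("custom", {}).get(ArmAnnotationInfo.CUSTOM_META_KEY)
    return getattr(info, "quantized", False)
```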

@pytorch-bot bot commented Oct 21, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15300

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (6 Unrelated Failures)

As of commit 5e375e5 with merge base c8e9684:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label Oct 21, 2025
@oscarandersson8218 added the partner: arm, ciflow/trunk, and release notes: none labels and removed the CLA Signed label Oct 21, 2025
@meta-cla bot added the CLA Signed label Oct 21, 2025
```python
# Quantizer side: mark the node as annotated and record the ArmAnnotationInfo
# under the 'custom' meta field so it is preserved through to the partitioner.
node.meta[Q_ANNOTATION_KEY]._annotated = True
meta_custom = node.meta.get("custom", {})
meta_custom[ArmAnnotationInfo.CUSTOM_META_KEY] = annotation_info
node.meta["custom"] = meta_custom
```
@oscarandersson8218 (Collaborator, Author) commented on the snippet above:

@digantdesai What are your thoughts on this conceptually?

@AdrianLundell (Collaborator) commented:

@digantdesai I have a feeling the MCU backend will need something similar to this since mixed int/fp graphs will likely be more common there.

@zingo (Collaborator) commented Nov 6, 2025

Adding @SS-JIA as @digantdesai is out of office.

@oscarandersson8218 marked this pull request as ready for review November 6, 2025 08:13
@SS-JIA (Contributor) commented Nov 7, 2025

Apologies for the delay; I will take a look at this PR tomorrow.

AdrianLundell added a commit to AdrianLundell/executorch that referenced this pull request Nov 7, 2025
A number of ops only handle shape/metadata without changing the dynamic range. In these cases, no rescaling needs to be performed and the int8 portable_ops kernel can be used directly.

A new test is added to ensure this behaviour, as well as a test showing how operators which do change the dynamic range (SUB) are not supported.

To support quantization of graphs with no-rescale ops at the beginning/end of the graph, two new quantizers, InputQuantizer and OutputQuantizer, are introduced. By explicitly stating the dtype of the input/output, no-rescale ops inherit dtypes from them as with any other op.

This change exposes the issue of mixing dtypes within the graph, which adds back xfails for the broadcasted add and mul tests. This can be fixed in a future patch after pytorch#15300 is resolved.

Signed-off-by: Adrian Lundell <[email protected]>
Change-Id: I8f79b86b633f9ad8d9f183c914754b0ee2f7a87c
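
For intuition, a rough sketch of the no-rescale idea (illustrative names only, not code from the referenced commit):

```python
# Illustrative sketch only; names are assumptions, not the referenced
# commit's code. A shape-only op (reshape, transpose, slice, ...) leaves
# values untouched, so the output reuses the input's (scale, zero_point)
# and the int8 kernel can run directly with no rescale.
def output_qparams_for_reshape(input_scale: float, input_zp: int):
    return input_scale, input_zp  # unchanged: no rescale needed

# SUB combines two dynamic ranges: out = a - b spans
# [a_min - b_max, a_max - b_min], so the output needs its own scale and an
# explicit rescale step; the no-rescale shortcut does not apply.
def output_range_for_sub(a_min, a_max, b_min, b_max):
    return a_min - b_max, a_max - b_min
```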
@SS-JIA (Contributor) left a comment:

LGTM!

Use the Node meta 'custom' field to propagate information from the quantizer to the partitioner using a new ArmAnnotationInfo data class. This allows us to track quantized nodes reliably, which is useful for determining which nodes should 'fold' their quantization parameters and which should be kept in fp when mixing integer and float in a sub-graph.

Co-authored-by: Per Åstrand <[email protected]>
Signed-off-by: Oscar Andersson <[email protected]>
Change-Id: I31309d65cac50e497318eae8678880684ec77cda
@Sebastian-Larsson merged commit 6550a37 into pytorch:main Nov 11, 2025
290 of 296 checks passed
SS-JIA added a commit that referenced this pull request Nov 11, 2025
@SS-JIA (Contributor) commented Nov 11, 2025

@oscarandersson8218 unfortunately, this diff is causing a lot of failures internally, and we will have to revert it. Sorry about that; I should have imported the changes first to make sure everything was OK.

There were two types of failures.

First, BUCK build failures, which I was able to fix with #15759.

The second is a bit trickier. It turns out we have some internal tests that try to serialize the exported model via

```python
torch.export.save(exported_model, file)
```

When this happens, we see the following error:

```
Original exception Traceback (most recent call last):
  File "/data/sandcastle/boxes/trunk-hg-full-fbsource/buck-out/v2/gen/fbcode/674df896a4e9a928/modai/test/__test_ethos_emg_recipes__/test_ethos_emg_recipes#link-tree/torch/_export/serde/serialize.py", line 954, in serialize_metadata
    ret["custom"] = json.dumps(custom)
                    ^^^^^^^^^^^^^^^^^^
  File "/usr/local/fbcode/platform010/lib/python3.12/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/fbcode/platform010/lib/python3.12/json/encoder.py", line 200, in encode
    chunks = self.iterencode(o, _one_shot=True)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/fbcode/platform010/lib/python3.12/json/encoder.py", line 258, in iterencode
    return _iterencode(o, 0)
           ^^^^^^^^^^^^^^^^^
  File "/usr/local/fbcode/platform010/lib/python3.12/json/encoder.py", line 180, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type ArmAnnotationInfo is not JSON serializable

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/data/sandcastle/boxes/trunk-hg-full-fbsource/buck-out/v2/gen/fbcode/674df896a4e9a928/modai/test/__test_ethos_emg_recipes__/test_ethos_emg_recipes#link-tree/torch/_export/serde/serialize.py", line 1826, in serialize_graph
    getattr(self, f"handle_{node.op}")(node)
  File "/data/sandcastle/boxes/trunk-hg-full-fbsource/buck-out/v2/gen/fbcode/674df896a4e9a928/modai/test/__test_ethos_emg_recipes__/test_ethos_emg_recipes#link-tree/torch/_export/serde/serialize.py", line 750, in handle_call_function
    metadata=self.serialize_metadata(node),
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/sandcastle/boxes/trunk-hg-full-fbsource/buck-out/v2/gen/fbcode/674df896a4e9a928/modai/test/__test_ethos_emg_recipes__/test_ethos_emg_recipes#link-tree/torch/_export/serde/serialize.py", line 956, in serialize_metadata
    raise SerializeError(
torch._export.serde.serialize.SerializeError: Failed to serialize custom metadata for node cat with error Object of type ArmAnnotationInfo is not JSON serializable
```

I believe the easiest fix for this would be to make ArmAnnotationInfo JSON serializable.
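
As a rough sketch of one way to do that (an assumption, not necessarily what the follow-up change does), the annotation could be stored as a plain dict so that json.dumps accepts it:

```python
# Hedged sketch of one possible fix; not necessarily what the follow-up PR
# does. Storing a plain dict (via dataclasses.asdict) instead of the
# dataclass instance keeps node.meta["custom"] JSON-serializable.
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ArmAnnotationInfo:
    quantized: bool
    CUSTOM_META_KEY = "arm_annotation_info"  # assumed key name

def annotate_node(node) -> None:
    meta_custom = node.meta.get("custom", {})
    # asdict() yields {'quantized': True}, which json.dumps accepts.
    meta_custom[ArmAnnotationInfo.CUSTOM_META_KEY] = asdict(
        ArmAnnotationInfo(quantized=True)
    )
    node.meta["custom"] = meta_custom

# torch.export's serializer effectively does json.dumps(node.meta["custom"]):
print(json.dumps({"arm_annotation_info": {"quantized": True}}))  # now succeeds
```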

Would you mind submitting a new PR with these changes? For the new one I will be sure to import the changes internally first.

@oscarandersson8218 (Collaborator, Author) commented:

@SS-JIA Re-uploaded the commit in #15778.
